
    Severity Analysis of Large Truck Crashes: Comparison Between Regression Modeling Methods and Machine Learning Methods

    According to the Texas Department of Transportation’s Texas Motor Vehicle Crash Statistics, Texas has had the highest number of severe crashes involving large trucks in the US. As defined by the US Department of Transportation, a large truck is any vehicle with a gross vehicle weight rating greater than 10,000 pounds. Large trucks generally require more time and much more space to accelerate, slow down, and stop, and they have large blind spots when making wide turns. Therefore, when an unexpected traffic situation arises, it is more difficult for large trucks than for regular vehicles to take evasive action and avoid a collision. Due to their large size and heavy weight, large truck crashes often result in huge economic and social costs. Predicting the severity level of a reported large truck crash with unknown severity, or of crashes that may be expected to occur in the future, is useful: it can help prevent crashes from happening, or help rescue teams and hospitals provide proper medical care as fast as possible. To identify appropriate modeling approaches for predicting the severity of large truck crashes, this research selected four representative tree-based classification ML models (Extreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), Random Forest (RF), and Gradient Boosting Decision Tree (GBDT)), two non-tree-based ML models (Support Vector Machine (SVM) and k-Nearest Neighbors (kNN)), and a logistic regression (LR) model. The results indicate that the GBDT model performs best among all seven models.
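The model comparison described above can be sketched with scikit-learn on synthetic data. This is an illustrative stand-in only: the study used Texas crash records, which are not reproduced here, and XGBoost is omitted because it lives in a separate package.

```python
# Hypothetical sketch: comparing the classifier families named in the
# abstract on synthetic data standing in for crash records.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Features ~ crash attributes, y ~ severity level (3 classes).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GBDT": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

On real crash data the ranking could differ; the point is only the shared fit/score comparison loop.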

    Crystal structure of diaqua-(N-(1-(pyrazin-2-yl)ethylidene)pyridin-1-ium-4-carbohydrazonate-κ3N,N′,O)-tris(nitrato-κ2O,O′)lanthanum(III), C12H15N8O12La

    C12H15N8O12La, monoclinic, P21/c (no. 14), a = 8.3267(7) Å, b = 28.348(2) Å, c = 9.1097(8) Å, β = 107.5410(10)°, V = 2050.3(3) Å3, Z = 4, Rgt(F) = 0.0236, wRref(F2) = 0.0559, T = 296(2) K.
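The reported cell volume is internally consistent with the listed cell parameters: for a monoclinic cell, V = a·b·c·sin(β).

```python
# Sanity check of the monoclinic cell volume from the reported parameters.
import math

a, b, c = 8.3267, 28.348, 9.1097   # cell edges, Å
beta = math.radians(107.5410)      # monoclinic angle
V = a * b * c * math.sin(beta)     # V = a*b*c*sin(beta) for monoclinic cells
print(f"V = {V:.1f} A^3")          # close to the reported 2050.3(3) Å^3
```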

    Analysis of factors contributing to the severity of large truck crashes

    Crashes that involve large trucks often result in immense human, economic, and social losses. To prevent and mitigate severe large truck crashes, the factors contributing to the severity of these crashes need to be identified before appropriate countermeasures can be explored. In this research, we applied three tree-based machine learning (ML) techniques, i.e., random forest (RF), gradient boost decision tree (GBDT), and adaptive boosting (AdaBoost), to analyze the factors contributing to the severity of large truck crashes. In addition, a mixed logit model was developed as a baseline against which to compare the factors identified by the ML models. The analysis was performed on crash data collected from the Texas Crash Records Information System (CRIS) from 2011 to 2015. The results of this research demonstrate that the GBDT model outperforms the other ML methods in terms of prediction accuracy and its capability to identify more of the contributing factors that the mixed logit model also identified as significant. Moreover, the GBDT method can effectively identify both categorical and numerical factors, and the directions and magnitudes of the impacts of the factors it identifies are all reasonable and explainable. Among the identified factors, driving under the influence of drugs, alcohol, and fatigue are the most important contributors to the severity of large truck crashes. In addition, the presence of curbs and medians, and of lanes and shoulders with sufficient width, can help prevent severe large truck crashes.
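The factor-ranking step described here (fit a GBDT classifier, then read off feature importances) can be sketched as follows. The feature names and synthetic data are illustrative assumptions, not the CRIS variables.

```python
# Hedged sketch of ranking contributing factors with GBDT feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["driver_impaired", "curb_present", "median_present",
            "lane_width", "shoulder_width", "speed_limit"]
X = rng.normal(size=(500, len(features)))
# Make severity depend mostly on the first (impairment) feature.
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

gbdt = GradientBoostingClassifier(random_state=0).fit(X, y)
ranking = sorted(zip(features, gbdt.feature_importances_),
                 key=lambda kv: -kv[1])
for name, imp in ranking:
    print(f"{name}: {imp:.3f}")
```

Importances sum to 1 and reflect how often (and how profitably) each feature is split on across the ensemble's trees; the paper additionally uses partial-dependence-style reasoning to get impact directions, which plain importances do not provide.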

    Hydrological Drought Forecasting and Assessment Based on the Standardized Stream Index in Southwest China

    Southwest China is abundant in rainfall and water resources; however, severe and extreme droughts have hit it more frequently in recent years, causing great loss of human life and financial damage. To investigate the feasibility of the standardized stream index in Southwest China, the Nanpanjiang River basin above the Xiaolongtan hydrological station was selected as the case study site. Based on long-term daily hydrological and meteorological data series, runoff was simulated with the daily Xinanjiang model; the standardized stream index was then calculated, and its feasibility was explored by comparing it with two other hydrological drought indices. The results revealed that the standardized stream index performed well in detecting the onset, severity, and duration of the 2009/2010 extreme drought. The output of this paper could provide valuable references for regional and national drought monitoring and forecasting systems.
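A standardized streamflow index of this family is typically built by fitting a probability distribution to the flow series and mapping its CDF through the inverse standard normal, as in the SPI construction. The sketch below assumes a gamma fit and synthetic flows; a real application would fit per calendar month on observed or Xinanjiang-simulated runoff.

```python
# Hedged sketch of a standardized index: gamma fit + inverse-normal transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flow = rng.gamma(shape=2.0, scale=50.0, size=600)  # synthetic monthly runoff

shape, loc, scale = stats.gamma.fit(flow, floc=0)  # fit gamma, loc fixed at 0
cdf = stats.gamma.cdf(flow, shape, loc=loc, scale=scale)
ssi = stats.norm.ppf(cdf)                          # standardized stream index

# By construction the index is ~N(0, 1); negative values indicate drought.
print(f"mean = {ssi.mean():.2f}, std = {ssi.std():.2f}")
```

Drought events are then read off as runs where the index stays below a threshold (e.g. -1), which gives the onset, severity, and duration statistics the abstract mentions.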

    On incremental global update support in cooperative database systems

    OzGateway is a cooperative database system designed for integrating heterogeneous existing information systems into an interoperable environment. It also aims to provide a gateway for legacy information system migration. This paper summarises the problems and results of multidatabase transaction management research. To support global updates in OzGateway in an evolutionary way, we introduce a classification of multidatabase transactions and discuss the problems in each category. The architecture of OzGateway and the design of the global transaction manager and servers are presented.

    Plug-and-Play Algorithms for Video Snapshot Compressive Imaging

    We consider the reconstruction problem of video snapshot compressive imaging (SCI), which captures high-speed videos using a low-speed 2D sensor (detector). The underlying principle of SCI is to modulate sequential high-speed frames with different masks; these encoded frames are then integrated into a single snapshot on the sensor, so the sensor itself can run at low speed. On one hand, video SCI enjoys the advantages of low bandwidth, low power, and low cost. On the other hand, applying SCI to large-scale problems (HD or UHD videos) in daily life is still challenging, and one of the bottlenecks lies in the reconstruction algorithm. Existing algorithms are either too slow (iterative optimization algorithms) or not flexible with respect to the encoding process (deep-learning-based end-to-end networks). In this paper, we develop fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework. In addition to the PnP-ADMM method, we further propose the PnP-GAP (generalized alternating projection) algorithm with a lower computational workload. We first employ image deep denoising priors to show that PnP can recover a UHD color video with 30 frames from a snapshot measurement. Since videos have strong temporal correlation, by employing video deep denoising priors we achieve a significant improvement in the results. Furthermore, we extend the proposed PnP algorithms to the color SCI system using mosaic sensors, where each pixel captures only the red, green, or blue channel. A joint reconstruction and demosaicing paradigm is developed for flexible and high-quality reconstruction of color video SCI systems. Extensive results on both simulation and real datasets verify the superiority of our proposed algorithm.
    Comment: 18 pages, 12 figures and 4 tables. Journal extension of arXiv:2003.13654. Code available at https://github.com/liuyang12/PnP-SCI_pytho
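The PnP-GAP idea described above can be sketched in a few lines of numpy: for SCI, Phi·Phiᵀ is diagonal (the per-pixel sum of squared masks), so the Euclidean projection onto the measurement constraint is elementwise, and a denoiser is plugged in between projections. The box-filter denoiser below is a crude stand-in for the deep denoising priors used in the paper, and all shapes and data are illustrative.

```python
# Minimal PnP-GAP sketch for video SCI: B masked frames summed into one
# snapshot y, reconstructed by alternating denoising and projection.
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 32, 32, 8
masks = rng.integers(0, 2, size=(B, H, W)).astype(float)  # Phi (binary masks)
truth = rng.random((B, H, W))                             # hypothetical video
y = (masks * truth).sum(axis=0)                           # snapshot measurement

def box_denoise(v):
    """Stand-in denoiser: average each frame with its shifted copies."""
    return (v + np.roll(v, 1, axis=1) + np.roll(v, 1, axis=2)) / 3.0

phi_sum = (masks ** 2).sum(axis=0)     # diagonal of Phi Phi^T, per pixel
phi_sum[phi_sum == 0] = 1.0            # guard pixels no mask ever samples

x = masks * (y / phi_sum)              # initialize with Phi^T (PhiPhi^T)^-1 y
for _ in range(30):
    x = box_denoise(x)                           # plug-and-play prior step
    yb = (masks * x).sum(axis=0)                 # forward model Phi x
    x = x + masks * ((y - yb) / phi_sum)         # projection onto Phi x = y
```

Because the projection is exact, the iterate always ends measurement-consistent; reconstruction quality then depends almost entirely on the strength of the plugged-in denoiser, which is why the paper's move from image to video (temporally aware) deep priors matters.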